[Graph-viewer residue: a column of per-node output shapes from an ONNX graph dump. The recoverable structure of the network:

input: 1×3×300×300 (NCHW)
stage 1: 75×75 spatial, 96 channels, 3 blocks (MLP expands 96→384; the flattened 2-D shapes 5625×96 and 5625×384 are (H·W)×C with 5625 = 75×75)
stage 2: 37×37 spatial, 192 channels, 3 blocks (192→768; 1369 = 37×37)
stage 3: 18×18 spatial, 384 channels, 9 blocks (384→1536; 324 = 18×18)
stage 4: 9×9 spatial, 768 channels, 3 blocks (768→3072; 81 = 9×9)
head: global pool to 1×768×1×1, flatten to 1×768, scalar quantization parameters, and a final 1×11160 logit vector

The depths (3, 3, 9, 3) and widths (96, 192, 384, 768) are consistent with a ConvNeXt-Tiny-scale backbone. Within each block, the 1×H×W and (H·W)×1 shapes are per-token dynamic-quantization scale tensors, the 1×1×1×C and 1×1×1×1 shapes are GRN reduction statistics, and the alternation between NCHW and NHWC orderings marks the Transpose pairs around each LayerNormalization.]
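The spatial sizes and token counts in the shape column above follow from a stride-4 patchify stem and three stride-2 downsamples; a quick sketch (the strides are inferred from the shapes, not stated in the dump):

```python
# Reproduce the spatial sizes and flattened token counts seen in the dump,
# assuming a stride-4 patchify stem and three stride-2 downsample layers.
sizes = [300 // 4]                  # stem: 300 -> 75
for _ in range(3):
    sizes.append(sizes[-1] // 2)    # 75 -> 37 -> 18 -> 9 (floor division)
tokens = [s * s for s in sizes]     # the H*W row counts of the 2-D shapes

print(sizes)    # [75, 37, 18, 9]
print(tokens)   # [5625, 1369, 324, 81]
```

The odd sizes (75, 37, 9) imply floor division, which is what a 2×2 stride-2 convolution produces on odd inputs.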
[Graph-viewer residue: node labels for the graph prologue and stem. Recoverable content: the graph input is float32[1,3,300,300]; a run of Reshape nodes lifts the float32 parameter vectors (96, 192, 384 and 768 elements) to 4-D int64[4] target shapes for broadcasting; SequenceEmpty and Loop subgraphs named _inlfunc__aten_as_strided_onnx_* (with "Show Graph" widget text) are leftovers of an inlined aten::as_strided. The stem is dynamically quantized: DynamicQuantizeLinear (input_QuantizeLinear) feeds a ConvInteger with uint8[96,3,4,4] weights (w_zero_point = 117), followed by Cast to float, Mul by the combined scale (B = 0.00165145…), and a bias Add: a 4×4 patchify convolution whose stride must be 4 to take 300×300 to 75×75. LayerNorm2d then runs as Transpose → LayerNormalization (float32[96] Scale and B) → Transpose, and a second DynamicQuantizeLinear (stem_1_QuantizeLinear) re-quantizes the stem output.]
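The DynamicQuantizeLinear nodes here compute the activation scale and zero point at runtime. A minimal NumPy sketch of the operator's spec behavior (the function name is mine; uint8 output, with the range widened to include zero):

```python
import numpy as np

def dynamic_quantize_linear(x):
    # ONNX DynamicQuantizeLinear (uint8): the quantization range is widened
    # to include 0, scale = range / 255, and the zero point is the rounded,
    # saturated image of 0 under the affine map.
    x_min = min(float(x.min()), 0.0)
    x_max = max(float(x.max()), 0.0)
    scale = (x_max - x_min) / 255.0 or 1.0          # guard all-zero input
    zero_point = np.uint8(np.clip(np.rint(-x_min / scale), 0, 255))
    q = np.clip(np.rint(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

q, scale, zp = dynamic_quantize_linear(np.array([-1.0, 0.0, 2.0], np.float32))
print(q.tolist(), int(zp))   # [0, 85, 255] 85  (scale = 3/255)
```

Under this reading, the stem's Mul constant B = 0.00165145… is most likely the static weight scale, multiplied at runtime by the dynamic activation scale that this operator emits (that interpretation is mine, not stated in the dump).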
[Graph-viewer residue: node labels for stage-0 block 0 (timm ConvNeXtBlock, stages.0.blocks.0). The depthwise 7×7 convolution runs as ConvInteger with uint8[96,1,7,7] weights (w_zero_point = 130, scale B = 0.00480876…), followed by Cast/Mul/Add dequantization and a LayerNormalization with float32[96] Scale and B. The first MLP linear (mlp_fc1, 96→384) is quantized by a long inlined subgraph: Reshape, per-token ReduceMin/ReduceMax, Min/Max against zero, a symmetric range via Neg and Max, Div by 127 to obtain per-token scales (constants shaped 1×75×75×1), Reciprocal and Expand, Mul/Round/Add, clamping Max/Min, and Cast to uint8; a MatMulInteger against uint8[96,384] weights (b_zero_point = 128) follows, with Cast/Mul/Mul rescaling, a float32[384] bias Add, and Reshape nodes moving between the 1×75×75×C and 5625×C layouts.]
[Graph-viewer residue: node labels for the block activation. GELU with approximate='none' is inlined as Div by 1.41421353… (√2), Erf, Add 1, Mul, and Mul by 0.5, i.e. x · 0.5 · (1 + erf(x/√2)).]
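The five nodes above are the exact (erf-based) GELU, x · 0.5 · (1 + erf(x/√2)); a one-line check:

```python
import math

def gelu(x):
    # Exact GELU, matching the inlined Div(sqrt 2) -> Erf -> Add(1) -> Mul -> Mul(0.5).
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

print(gelu(0.0), round(gelu(1.0), 4))   # 0.0 0.8413
```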
[Graph-viewer residue: node labels for the block's GlobalResponseNorm (timm mlp.grn): Abs and ReduceL2 over the two spatial axes (an inlined aten::linalg_vector_norm), ReduceMean over the channel axis, Add of a small ε (B = 9.99999997…), Div, Mul, an inlined aten::addcmul with 1×1×1×384 scale and shift constants, and the residual Add.]
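The reduction pattern above matches timm's GlobalResponseNorm on NHWC tensors. A hedged NumPy sketch (function and variable names are mine; the graph bakes in learned 1×1×1×384 γ/β constants, whereas here γ = 1 and β = 0):

```python
import numpy as np

def grn(x, gamma, beta, eps=1e-6):
    # ReduceL2 over the spatial axes, ReduceMean over channels, Div, then
    # addcmul(beta, gamma, x * nx) plus the residual input.
    gx = np.linalg.norm(x, axis=(1, 2), keepdims=True)   # 1x1x1xC spatial L2 norms
    nx = gx / (gx.mean(axis=-1, keepdims=True) + eps)    # normalize across channels
    return x + gamma * (x * nx) + beta

x = np.zeros((1, 2, 2, 2), dtype=np.float32)
x[..., 0], x[..., 1] = 1.0, 2.0      # channel L2 norms: 2 and 4, mean 3
y = grn(x, gamma=1.0, beta=0.0)      # nx = [2/3, 4/3] -> y = [5/3, 14/3]
```

The node order here, between the GELU and the second linear, is where GRN sits in a ConvNeXt-V2-style block.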
[Graph-viewer residue: node labels for the second MLP linear (mlp_fc2, 384→96), mirroring the mlp_fc1 pattern: the same inlined per-token dynamic-quantization subgraph (ReduceMin/ReduceMax, Div by 127, Round, clamp, Cast to uint8), a MatMulInteger against uint8[384,96] weights (b_zero_point = 128), rescaling Casts and Muls, a float32[96] bias Add, and Reshapes between the 5625×96 and 1×75×75×96 layouts.]
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___0___blocks_0_1_Add_5
Transpose
Transpose
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_container_Sequential_getattr_L__self___stages___0___blocks_1_getattr_l__self___stages___0___blocks_0_1_QuantizeLinear
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___0___blocks_1_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___0___blocks___1___conv_dw_1_0_Conv_0_quant_scales_mul
float32
B
= 0.00389496…
ConvInteger
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___0___blocks_1_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___0___blocks___1___conv_dw_1_0_Conv_0_quant
uint8[96,1,7,7]
w
〈96×1×7×7〉
uint8
w_zero_point
= 123
Cast
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___0___blocks_1_1_getattr_getattr_l__self___stages___0___blocks___1___conv_dw_1_output_quantized_cast
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___0___blocks_1_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___0___blocks___1___conv_dw_1_0_Conv_0_quant_output_scale_mul
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___0___blocks_1_1_getattr_getattr_l__self___stages___0___blocks___1___conv_dw_1_bias_add
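The run of nodes above (DynamicQuantizeLinear feeding ConvInteger, then Cast → Mul → Add) is the standard dynamic-quantization pattern: the activation is quantized to uint8 on the fly, the integer convolution accumulates in int32, and the result is cast back to float, rescaled by the combined input×weight scale (the `*_quant_output_scale_mul` node), and offset by the float bias (`*_bias_add`). A minimal numpy sketch of the DynamicQuantizeLinear semantics and the dequant tail (function names are illustrative, not from the graph):

```python
import numpy as np

def dynamic_quantize_linear(x):
    # ONNX DynamicQuantizeLinear: uint8 range covering [min(x,0), max(x,0)],
    # with the zero point chosen so that 0.0 is exactly representable.
    x_min = min(float(x.min()), 0.0)
    x_max = max(float(x.max()), 0.0)
    scale = (x_max - x_min) / 255.0 or 1.0            # guard all-zero input
    zp = np.uint8(np.clip(round(-x_min / scale), 0, 255))
    q = np.clip(np.round(x / scale) + zp, 0, 255).astype(np.uint8)
    return q, np.float32(scale), zp

def dequantize_accumulator(acc_int32, x_scale, w_scale, bias):
    # Cast -> Mul(output_scale) -> Add(bias): int32 accumulator back to float.
    return acc_int32.astype(np.float32) * (x_scale * w_scale) + bias
```

Because the zero point maps 0.0 exactly, padding and ReLU-style zeros survive quantization without bias.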
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___0___blocks_1_1_Transpose_1
LayerNormalization
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___0___blocks_1_1_timm_layers_norm_LayerNorm_getattr_getattr_L__self___stages___0___blocks___1___norm_1_2_LayerNormalization_0
float32[96]
Scale
〈96〉
float32[96]
B
〈96〉
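Each LayerNormalization node above consumes the NHWC activation plus per-channel Scale and B (bias) tensors of width 96 and normalizes over the last axis. A reference implementation of that semantics (the epsilon value is assumed; the listing does not show the attribute):

```python
import numpy as np

def layer_norm(x, scale, bias, eps=1e-6):
    # Normalize over the last (channel) axis, then apply the per-channel
    # Scale and B inputs, matching ONNX LayerNormalization with axis=-1.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps) * scale + bias
```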
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_Add_63
float32[1,75,75,1]
B
〈1×75×75×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_Cast_83
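The long ReduceMin/ReduceMax → Min/Max → Neg → Div(127) → Reciprocal → Mul → Round → Add → Max/Min → Cast chain above computes a dynamic per-position quantization of the activation before MatMulInteger: a symmetric scale per row (the larger of |min| and max, divided by 127), then round, shift into the uint8 range, and clamp. A hedged numpy sketch of that chain — the offset and clamp constants (128 and [0, 255]) are assumptions; the listing only shows the shapes of the tensors feeding Add_63 and Max_68/Min_69:

```python
import numpy as np

def per_row_dynamic_quantize(x):
    # Symmetric per-row scale: max(|rowmin|, rowmax) / 127, with 0 included
    # in the range (the Min/Max-with-zero nodes), then Round/Add/clamp/Cast.
    row_min = np.minimum(x.min(axis=-1, keepdims=True), 0.0)
    row_max = np.maximum(x.max(axis=-1, keepdims=True), 0.0)
    scale = np.maximum(-row_min, row_max) / 127.0
    scale = np.where(scale == 0.0, 1.0, scale)   # guard all-zero rows
    q = np.round(x / scale) + 128.0              # assumed zero point of 128
    q = np.clip(q, 0.0, 255.0).astype(np.uint8)
    return q, scale.astype(np.float32)
```

Per-row (rather than per-tensor) scales are why the Reciprocal/Expand/Mul nodes appear again after the matmul: each row of the product must be rescaled by its own scale.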
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1__to_copy_9_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_MatMul_85_quant
uint8[96,384]
B
〈96×384〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_mm_2_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_Mul_89
float32[384]
B
〈384〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_Add_93
float32[384]
B
〈384〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc1_1_Reshape_99
int64[4]
shape
〈4〉
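Each quantized fc layer above then flattens the activation to 2-D (the paired Reshape 4→2 nodes), runs MatMulInteger against a uint8 weight with `b_zero_point = 128`, and dequantizes with Cast → Mul(combined scale) → Add(bias) before reshaping back to NHWC. The integer-side arithmetic, sketched in numpy (function name illustrative):

```python
import numpy as np

def quantized_linear(x_q, x_scale, x_zp, w_q, w_scale, w_zp, bias):
    # MatMulInteger subtracts both zero points and accumulates in int32;
    # the Cast/Mul/Add tail rescales by x_scale * w_scale and adds the bias.
    acc = (x_q.astype(np.int32) - np.int32(x_zp)) @ \
          (w_q.astype(np.int32) - np.int32(w_zp))
    return acc.astype(np.float32) * (x_scale * w_scale) + bias
```

With a per-row `x_scale` of shape `[rows, 1]`, numpy broadcasting applies the right scale to each output row, mirroring the Expand/Mul nodes in the graph.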
Div
_inlfunc__aten_gelu_approximate_none|folded_1_n2
float32
B
= 1.41421353…
Erf
_inlfunc__aten_gelu_approximate_none|folded_1_n3
Add
_inlfunc__aten_gelu_approximate_none|folded_1_n6
float32
B
= 1
Mul
_inlfunc__aten_gelu_approximate_none|folded_1_n7
Mul
_inlfunc__aten_gelu_approximate_none|folded_1_n10
float32
A
= 0.5
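The Div(1.41421353…) → Erf → Add(1) → Mul → Mul(0.5) pattern above is exact GELU (`approximate='none'`) unrolled into primitive ops. The equivalent closed form using only numpy and the stdlib:

```python
import math
import numpy as np

_erf = np.vectorize(math.erf)  # numpy has no erf; vectorize the stdlib one

def gelu_exact(x):
    # 0.5 * x * (1 + erf(x / sqrt(2))), i.e. x * Phi(x) for the standard normal.
    return 0.5 * x * (1.0 + _erf(x / math.sqrt(2.0)))
```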
Abs
_inlfunc__aten_linalg_vector_norm_onnx|folded_1_n4
ReduceL2
_inlfunc__aten_linalg_vector_norm_onnx|folded_1_n8_n1_n3_n3_n3_n0
int64[2]
axes
〈2〉
ReduceMean
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___0___blocks___1___mlp_grn_1_ReduceMean_9
int64[1]
axes
〈1〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___0___blocks___1___mlp_grn_1_Add_11
float32
B
= 9.99999997…
Div
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___0___blocks___1___mlp_grn_1_aten_div_12_n0
Mul
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___0___blocks___1___mlp_grn_1_Mul_19
Mul
_inlfunc_aten_addcmul|folded_1_n3
float32[1,1,1,384]
A
〈1×1×1×384〉
Add
_inlfunc_aten_addcmul|folded_1_n4
float32[1,1,1,384]
A
〈1×1×1×384〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___0___blocks___1___mlp_grn_1_Add_21
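The Abs/ReduceL2 (spatial axes) → ReduceMean (channel axis) → Add(1e-6…) → Div → Mul → addcmul → Add sequence above is timm's GlobalResponseNorm in NHWC layout. A sketch under the assumption that it follows timm's published GRN formula; `gamma` and `beta` stand in for the `[1,1,1,C]` constants feeding the `aten_addcmul` nodes:

```python
import numpy as np

def global_response_norm(x, gamma, beta, eps=1e-6):
    # x: NHWC. Per-channel spatial L2 norm, normalized by its mean over
    # channels, then a gated residual: x + addcmul(beta, gamma, x * nx).
    gx = np.sqrt(np.sum(x * x, axis=(1, 2), keepdims=True))    # [N,1,1,C]
    nx = gx / (np.mean(gx, axis=-1, keepdims=True) + eps)      # [N,1,1,C]
    return x + beta + gamma * (x * nx)
```

With `gamma` and `beta` initialized to zero (as in timm), the layer starts as an identity, which the residual Add_21 node preserves.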
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Add_63
float32[1,75,75,1]
B
〈1×75×75×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1__to_copy_13_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_MatMul_85_quant
uint8[384,96]
B
〈384×96〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_mm_3_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Mul_89
float32[96]
B
〈96〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Add_93
float32[96]
B
〈96〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___1___mlp_fc2_1_Reshape_99
int64[4]
shape
〈4〉
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___0___blocks_1_1_Add_5
Transpose
Transpose_token_0
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_container_Sequential_getattr_L__self___stages___0___blocks_1_getattr_l__self___stages___0___blocks_1_1_QuantizeLinear
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___0___blocks_2_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___0___blocks___2___conv_dw_1_0_Conv_0_quant_scales_mul
float32
B
= 0.00309308…
ConvInteger
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___0___blocks_2_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___0___blocks___2___conv_dw_1_0_Conv_0_quant
uint8[96,1,7,7]
w
〈96×1×7×7〉
uint8
w_zero_point
= 123
Cast
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___0___blocks_2_1_getattr_getattr_l__self___stages___0___blocks___2___conv_dw_1_output_quantized_cast
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___0___blocks_2_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___0___blocks___2___conv_dw_1_0_Conv_0_quant_output_scale_mul
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___0___blocks_2_1_getattr_getattr_l__self___stages___0___blocks___2___conv_dw_1_bias_add
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___0___blocks_2_1_Transpose_1
LayerNormalization
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___0___blocks_2_1_timm_layers_norm_LayerNorm_getattr_getattr_L__self___stages___0___blocks___2___norm_1_2_LayerNormalization_0
float32[96]
Scale
〈96〉
float32[96]
B
〈96〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_Add_63
float32[1,75,75,1]
B
〈1×75×75×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1__to_copy_17_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_MatMul_85_quant
uint8[96,384]
B
〈96×384〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_mm_4_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_Mul_89
float32[384]
B
〈384〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_Add_93
float32[384]
B
〈384〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc1_1_Reshape_99
int64[4]
shape
〈4〉
Div
_inlfunc__aten_gelu_approximate_none|folded_2_n2
float32
B
= 1.41421353…
Erf
_inlfunc__aten_gelu_approximate_none|folded_2_n3
Add
_inlfunc__aten_gelu_approximate_none|folded_2_n6
float32
B
= 1
Mul
_inlfunc__aten_gelu_approximate_none|folded_2_n7
Mul
_inlfunc__aten_gelu_approximate_none|folded_2_n10
float32
A
= 0.5
Abs
_inlfunc__aten_linalg_vector_norm_onnx|folded_2_n4
ReduceL2
_inlfunc__aten_linalg_vector_norm_onnx|folded_2_n8_n1_n3_n3_n3_n0
int64[2]
axes
〈2〉
ReduceMean
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___0___blocks___2___mlp_grn_1_ReduceMean_9
int64[1]
axes
〈1〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___0___blocks___2___mlp_grn_1_Add_11
float32
B
= 9.99999997…
Div
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___0___blocks___2___mlp_grn_1_aten_div_12_n0
Mul
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___0___blocks___2___mlp_grn_1_Mul_19
Mul
_inlfunc_aten_addcmul|folded_2_n3
float32[1,1,1,384]
A
〈1×1×1×384〉
Add
_inlfunc_aten_addcmul|folded_2_n4
float32[1,1,1,384]
A
〈1×1×1×384〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___0___blocks___2___mlp_grn_1_Add_21
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_Add_63
float32[1,75,75,1]
B
〈1×75×75×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1__to_copy_21_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_MatMul_85_quant
uint8[384,96]
B
〈384×96〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_mm_5_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_Mul_89
float32[96]
B
〈96〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_Add_93
float32[96]
B
〈96〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___0___blocks___2___mlp_fc2_1_Reshape_99
int64[4]
shape
〈4〉
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___0___blocks_2_1_Add_5
LayerNormalization
_inlfunc_timm_layers_norm_LayerNorm2d_getattr_L__self___stages___1___downsample_0_1_LayerNormalization_1
float32[96]
Scale
〈96〉
float32[96]
B
〈96〉
Transpose
_inlfunc_timm_layers_norm_LayerNorm2d_getattr_L__self___stages___1___downsample_0_1_Transpose_2
DynamicQuantizeLinear
_inlfunc_timm_models_convnext_ConvNeXtStage_stages_1_1_torch_nn_modules_container_Sequential_getattr_L__self___stages___1___downsample_1_0_getattr_l__self___stages___1___downsample_0_1_QuantizeLinear
Mul
_inlfunc_timm_models_convnext_ConvNeXtStage_stages_1_1_torch_nn_modules_container_Sequential_getattr_L__self___stages___1___downsample_1_0_torch_nn_modules_conv_Conv2d_getattr_L__self___stages___1___downsample_1_1_1_Conv_0_quant_scales_mul
float32
B
= 0.00430828…
ConvInteger
_inlfunc_timm_models_convnext_ConvNeXtStage_stages_1_1_torch_nn_modules_container_Sequential_getattr_L__self___stages___1___downsample_1_0_torch_nn_modules_conv_Conv2d_getattr_L__self___stages___1___downsample_1_1_1_Conv_0_quant
uint8[192,96,2,2]
w
〈192×96×2×2〉
uint8
w_zero_point
= 139
Cast
_inlfunc_timm_models_convnext_ConvNeXtStage_stages_1_1_getattr_l__self___stages___1___downsample_1_output_quantized_cast
Mul
_inlfunc_timm_models_convnext_ConvNeXtStage_stages_1_1_torch_nn_modules_container_Sequential_getattr_L__self___stages___1___downsample_1_0_torch_nn_modules_conv_Conv2d_getattr_L__self___stages___1___downsample_1_1_1_Conv_0_quant_output_scale_mul
Add
_inlfunc_timm_models_convnext_ConvNeXtStage_stages_1_1_getattr_l__self___stages___1___downsample_1_bias_add
DynamicQuantizeLinear
_inlfunc_timm_models_convnext_ConvNeXtStage_stages_1_1_getattr_l__self___stages___1___downsample_1_QuantizeLinear
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_0_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___1___blocks___0___conv_dw_1_0_Conv_0_quant_scales_mul
float32
B
= 0.00367831…
ConvInteger
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_0_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___1___blocks___0___conv_dw_1_0_Conv_0_quant
uint8[192,1,7,7]
w
〈192×1×7×7〉
uint8
w_zero_point
= 129
Cast
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_0_1_getattr_getattr_l__self___stages___1___blocks___0___conv_dw_1_output_quantized_cast
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_0_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___1___blocks___0___conv_dw_1_0_Conv_0_quant_output_scale_mul
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_0_1_getattr_getattr_l__self___stages___1___blocks___0___conv_dw_1_bias_add
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_0_1_Transpose_1
LayerNormalization
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_0_1_timm_layers_norm_LayerNorm_getattr_getattr_L__self___stages___1___blocks___0___norm_1_2_LayerNormalization_0
float32[192]
Scale
〈192〉
float32[192]
B
〈192〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_Add_63
float32[1,37,37,1]
B
〈1×37×37×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1__to_copy_25_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_MatMul_85_quant
uint8[192,768]
B
〈192×768〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_mm_6_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_Mul_89
float32[768]
B
〈768〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_Add_93
float32[768]
B
〈768〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc1_1_Reshape_99
int64[4]
shape
〈4〉
Div
_inlfunc__aten_gelu_approximate_none|folded_3_n2
float32
B
= 1.41421353…
Erf
_inlfunc__aten_gelu_approximate_none|folded_3_n3
Add
_inlfunc__aten_gelu_approximate_none|folded_3_n6
float32
B
= 1
Mul
_inlfunc__aten_gelu_approximate_none|folded_3_n7
Mul
_inlfunc__aten_gelu_approximate_none|folded_3_n10
float32
A
= 0.5
Abs
_inlfunc__aten_linalg_vector_norm_onnx|folded_3_n4
ReduceL2
_inlfunc__aten_linalg_vector_norm_onnx|folded_3_n8_n1_n3_n3_n3_n0
int64[2]
axes
〈2〉
ReduceMean
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___0___mlp_grn_1_ReduceMean_9
int64[1]
axes
〈1〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___0___mlp_grn_1_Add_11
float32
B
= 9.99999997…
Div
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___0___mlp_grn_1_aten_div_12_n0
Mul
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___0___mlp_grn_1_Mul_19
Mul
_inlfunc_aten_addcmul|folded_3_n3
float32[1,1,1,768]
A
〈1×1×1×768〉
Add
_inlfunc_aten_addcmul|folded_3_n4
float32[1,1,1,768]
A
〈1×1×1×768〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___0___mlp_grn_1_Add_21
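The Abs/ReduceL2 → ReduceMean → Add(≈1e-6) → Div → Mul → addcmul → Add run above is timm's GlobalResponseNorm in channels-last layout: an L2 norm over the spatial axes, normalized by its channel mean, then `x + gamma·(x·nx) + beta`. A NumPy sketch under that reading (argument names are illustrative):

```python
import numpy as np

def global_response_norm(x, gamma, beta, eps=1e-6):
    # x: (N, H, W, C), channels-last as in the graph above.
    gx = np.linalg.norm(x, axis=(1, 2), keepdims=True)   # ReduceL2 over H, W
    nx = gx / (gx.mean(axis=-1, keepdims=True) + eps)    # ReduceMean over C, Add eps, Div
    return x + gamma * (x * nx) + beta                   # addcmul + residual Add
```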
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_Add_63
float32[1,37,37,1]
B
〈1×37×37×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1__to_copy_29_QuantizeLinear
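The long ReduceMin/ReduceMax → Neg/Max → Div(127) → Max → Reciprocal → Mul → Round → Add → Max/Min → Cast chain that ends in the DynamicQuantizeLinear above computes activation quantization parameters on the fly. The graph does this per token (note the `1×37×37×1` zero-point tensor); the sketch below is a simplified per-tensor symmetric uint8 variant, an assumption rather than a literal transcription:

```python
import numpy as np

def dynamic_quantize_uint8(x):
    # scale = max(|min|, |max|) / 127, with a small floor (Max_48-style)
    # so the Div never sees zero; values mapped into uint8 around 128.
    amax = float(max(x.max(), 0.0))
    amin = float(min(x.min(), 0.0))
    scale = max(max(amax, -amin) / 127.0, 1e-12)
    q = np.clip(np.round(x / scale) + 128.0, 0.0, 255.0).astype(np.uint8)
    return q, scale
```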
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_MatMul_85_quant
uint8[768,192]
B
〈768×192〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_mm_7_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_Mul_89
float32[192]
B
〈192〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_Add_93
float32[192]
B
〈192〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___0___mlp_fc2_1_Reshape_99
int64[4]
shape
〈4〉
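The MatMulInteger run above (uint8 activations and weights, `b_zero_point = 128`, Cast to float, Mul by the combined output scale, bias Add) is the standard integer-matmul dequantization epilogue. A toy NumPy sketch of that arithmetic (names are illustrative):

```python
import numpy as np

def int_matmul_dequant(a_q, a_scale, a_zp, b_q, b_scale, b_zp=128):
    # MatMulInteger: subtract zero points, accumulate in int32.
    acc = (a_q.astype(np.int32) - a_zp) @ (b_q.astype(np.int32) - b_zp)
    # Cast + Mul by the combined output scale (a_scale * b_scale).
    return acc.astype(np.float32) * (a_scale * b_scale)
```

With exactly representable inputs this reproduces the float matmul bit-for-bit, which is why the graph can fold the two scale Muls into one constant.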
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_0_1_Transpose_4
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_0_1_Add_5
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_container_Sequential_getattr_L__self___stages___1___blocks_1_getattr_l__self___stages___1___blocks_0_1_QuantizeLinear
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_1_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___1___blocks___1___conv_dw_1_0_Conv_0_quant_scales_mul
float32
B
= 0.00328640…
ConvInteger
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_1_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___1___blocks___1___conv_dw_1_0_Conv_0_quant
uint8[192,1,7,7]
w
〈192×1×7×7〉
uint8
w_zero_point
= 132
Cast
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_1_1_getattr_getattr_l__self___stages___1___blocks___1___conv_dw_1_output_quantized_cast
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_1_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___1___blocks___1___conv_dw_1_0_Conv_0_quant_output_scale_mul
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_1_1_getattr_getattr_l__self___stages___1___blocks___1___conv_dw_1_bias_add
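The ConvInteger → Cast → Mul(output scale) → Add(bias) run above is the quantized depthwise 7×7 convolution: integer accumulation with zero points subtracted, then dequantize and add the float bias. A toy single-channel, valid-padding sketch of the same arithmetic (the real node is depthwise over 192 channels with padding):

```python
import numpy as np

def conv_integer_dequant(x_q, x_zp, w_q, w_zp, out_scale):
    # ConvInteger: subtract zero points, accumulate in int32 (toy 2-D, one channel),
    # then Cast to float and Mul by the folded output scale.
    xi = x_q.astype(np.int32) - x_zp
    wi = w_q.astype(np.int32) - w_zp
    kh, kw = wi.shape
    oh, ow = xi.shape[0] - kh + 1, xi.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=np.int32)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(xi[i:i + kh, j:j + kw] * wi)
    return out.astype(np.float32) * out_scale
```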
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_1_1_Transpose_1
LayerNormalization
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_1_1_timm_layers_norm_LayerNorm_getattr_getattr_L__self___stages___1___blocks___1___norm_1_2_LayerNormalization_0
float32[192]
Scale
〈192〉
float32[192]
B
〈192〉
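The Transpose → LayerNormalization pairs above (with float32[192] Scale and B initializers) normalize over the channel axis after moving it last, as ONNX LayerNormalization defaults to the final axis. A NumPy sketch of that normalization:

```python
import numpy as np

def layernorm_last_axis(x, scale, bias, eps=1e-6):
    # LayerNormalization over the last (channel) axis, applied after a
    # Transpose to channels-last as in the node pairs above.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps) * scale + bias
```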
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Add_63
float32[1,37,37,1]
B
〈1×37×37×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1__to_copy_33_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_MatMul_85_quant
uint8[192,768]
B
〈192×768〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_mm_8_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Mul_89
float32[768]
B
〈768〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Add_93
float32[768]
B
〈768〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc1_1_Reshape_99
int64[4]
shape
〈4〉
Div
_inlfunc__aten_gelu_approximate_none|folded_4_n2
float32
B
= 1.41421353…
Erf
_inlfunc__aten_gelu_approximate_none|folded_4_n3
Add
_inlfunc__aten_gelu_approximate_none|folded_4_n6
float32
B
= 1
Mul
_inlfunc__aten_gelu_approximate_none|folded_4_n7
Mul
_inlfunc__aten_gelu_approximate_none|folded_4_n10
float32
A
= 0.5
Abs
_inlfunc__aten_linalg_vector_norm_onnx|folded_4_n4
ReduceL2
_inlfunc__aten_linalg_vector_norm_onnx|folded_4_n8_n1_n3_n3_n3_n0
int64[2]
axes
〈2〉
ReduceMean
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___1___mlp_grn_1_ReduceMean_9
int64[1]
axes
〈1〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___1___mlp_grn_1_Add_11
float32
B
= 9.99999997…
Div
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___1___mlp_grn_1_aten_div_12_n0
Mul
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___1___mlp_grn_1_Mul_19
Mul
_inlfunc_aten_addcmul|folded_4_n3
float32[1,1,1,768]
A
〈1×1×1×768〉
Add
_inlfunc_aten_addcmul|folded_4_n4
float32[1,1,1,768]
A
〈1×1×1×768〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___1___mlp_grn_1_Add_21
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Add_63
float32[1,37,37,1]
B
〈1×37×37×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1__to_copy_37_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_MatMul_85_quant
uint8[768,192]
B
〈768×192〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_mm_9_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Mul_89
float32[192]
B
〈192〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Add_93
float32[192]
B
〈192〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___1___mlp_fc2_1_Reshape_99
int64[4]
shape
〈4〉
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_1_1_Transpose_4
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_1_1_Add_5
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_container_Sequential_getattr_L__self___stages___1___blocks_1_getattr_l__self___stages___1___blocks_1_1_QuantizeLinear
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_2_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___1___blocks___2___conv_dw_1_0_Conv_0_quant_scales_mul
float32
B
= 0.00329827…
ConvInteger
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_2_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___1___blocks___2___conv_dw_1_0_Conv_0_quant
uint8[192,1,7,7]
w
〈192×1×7×7〉
uint8
w_zero_point
= 121
Cast
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_2_1_getattr_getattr_l__self___stages___1___blocks___2___conv_dw_1_output_quantized_cast
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_2_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___1___blocks___2___conv_dw_1_0_Conv_0_quant_output_scale_mul
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_2_1_getattr_getattr_l__self___stages___1___blocks___2___conv_dw_1_bias_add
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_2_1_Transpose_1
LayerNormalization
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_2_1_timm_layers_norm_LayerNorm_getattr_getattr_L__self___stages___1___blocks___2___norm_1_2_LayerNormalization_0
float32[192]
Scale
〈192〉
float32[192]
B
〈192〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Add_63
float32[1,37,37,1]
B
〈1×37×37×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1__to_copy_41_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_MatMul_85_quant
uint8[192,768]
B
〈192×768〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_mm_10_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Mul_89
float32[768]
B
〈768〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Add_93
float32[768]
B
〈768〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc1_1_Reshape_99
int64[4]
shape
〈4〉
Div
_inlfunc__aten_gelu_approximate_none|folded_5_n2
float32
B
= 1.41421353…
Erf
_inlfunc__aten_gelu_approximate_none|folded_5_n3
Add
_inlfunc__aten_gelu_approximate_none|folded_5_n6
float32
B
= 1
Mul
_inlfunc__aten_gelu_approximate_none|folded_5_n7
Mul
_inlfunc__aten_gelu_approximate_none|folded_5_n10
float32
A
= 0.5
Abs
_inlfunc__aten_linalg_vector_norm_onnx|folded_5_n4
ReduceL2
_inlfunc__aten_linalg_vector_norm_onnx|folded_5_n8_n1_n3_n3_n3_n0
int64[2]
axes
〈2〉
ReduceMean
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___2___mlp_grn_1_ReduceMean_9
int64[1]
axes
〈1〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___2___mlp_grn_1_Add_11
float32
B
= 9.99999997…
Div
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___2___mlp_grn_1_aten_div_12_n0
Mul
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___2___mlp_grn_1_Mul_19
Mul
_inlfunc_aten_addcmul|folded_5_n3
float32[1,1,1,768]
A
〈1×1×1×768〉
Add
_inlfunc_aten_addcmul|folded_5_n4
float32[1,1,1,768]
A
〈1×1×1×768〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___1___blocks___2___mlp_grn_1_Add_21
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Add_63
float32[1,37,37,1]
B
〈1×37×37×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1__to_copy_45_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_MatMul_85_quant
uint8[768,192]
B
〈768×192〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_mm_11_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Mul_89
float32[192]
B
〈192〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Add_93
float32[192]
B
〈192〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___1___blocks___2___mlp_fc2_1_Reshape_99
int64[4]
shape
〈4〉
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_2_1_Transpose_4
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___1___blocks_2_1_Add_5
Transpose
_inlfunc_timm_layers_norm_LayerNorm2d_getattr_L__self___stages___2___downsample_0_1_Transpose_0
LayerNormalization
_inlfunc_timm_layers_norm_LayerNorm2d_getattr_L__self___stages___2___downsample_0_1_LayerNormalization_1
float32[192]
Scale
〈192〉
float32[192]
B
〈192〉
Transpose
_inlfunc_timm_layers_norm_LayerNorm2d_getattr_L__self___stages___2___downsample_0_1_Transpose_2
DynamicQuantizeLinear
_inlfunc_timm_models_convnext_ConvNeXtStage_stages_2_1_torch_nn_modules_container_Sequential_getattr_L__self___stages___2___downsample_1_0_getattr_l__self___stages___2___downsample_0_1_QuantizeLinear
Mul
_inlfunc_timm_models_convnext_ConvNeXtStage_stages_2_1_torch_nn_modules_container_Sequential_getattr_L__self___stages___2___downsample_1_0_torch_nn_modules_conv_Conv2d_getattr_L__self___stages___2___downsample_1_1_1_Conv_0_quant_scales_mul
float32
B
= 0.00490160…
ConvInteger
_inlfunc_timm_models_convnext_ConvNeXtStage_stages_2_1_torch_nn_modules_container_Sequential_getattr_L__self___stages___2___downsample_1_0_torch_nn_modules_conv_Conv2d_getattr_L__self___stages___2___downsample_1_1_1_Conv_0_quant
uint8[384,192,2,2]
w
〈384×192×2×2〉
uint8
w_zero_point
= 133
Cast
_inlfunc_timm_models_convnext_ConvNeXtStage_stages_2_1_getattr_l__self___stages___2___downsample_1_output_quantized_cast
Mul
_inlfunc_timm_models_convnext_ConvNeXtStage_stages_2_1_torch_nn_modules_container_Sequential_getattr_L__self___stages___2___downsample_1_0_torch_nn_modules_conv_Conv2d_getattr_L__self___stages___2___downsample_1_1_1_Conv_0_quant_output_scale_mul
Add
_inlfunc_timm_models_convnext_ConvNeXtStage_stages_2_1_getattr_l__self___stages___2___downsample_1_bias_add
DynamicQuantizeLinear
_inlfunc_timm_models_convnext_ConvNeXtStage_stages_2_1_getattr_l__self___stages___2___downsample_1_QuantizeLinear
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_0_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___0___conv_dw_1_0_Conv_0_quant_scales_mul
float32
B
= 0.00399155…
ConvInteger
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_0_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___0___conv_dw_1_0_Conv_0_quant
uint8[384,1,7,7]
w
〈384×1×7×7〉
uint8
w_zero_point
= 134
Cast
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_0_1_getattr_getattr_l__self___stages___2___blocks___0___conv_dw_1_output_quantized_cast
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_0_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___0___conv_dw_1_0_Conv_0_quant_output_scale_mul
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_0_1_getattr_getattr_l__self___stages___2___blocks___0___conv_dw_1_bias_add
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_0_1_Transpose_1
LayerNormalization
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_0_1_timm_layers_norm_LayerNorm_getattr_getattr_L__self___stages___2___blocks___0___norm_1_2_LayerNormalization_0
float32[384]
Scale
〈384〉
float32[384]
B
〈384〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Add_63
float32[1,18,18,1]
B
〈1×18×18×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Cast_83
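The ReduceMin/ReduceMax → Min/Max-with-zero → Neg/Max → Div-by-127 → Reciprocal/Mul/Round → Add/Max/Min → Cast chain above appears to compute a symmetric uint8 quantization of the activation, presumably with zero point 128 (the broadcast 1×18×18×1 Add constant) applied per tile after the Reshapes. A per-row sketch that mirrors the arithmetic; the exact tiling implied by the Reshape shapes is omitted, which is a simplifying assumption:

```python
import numpy as np

np.random.seed(0)

def quantize_symmetric_u8(x):
    """Symmetric uint8 quantization with zero point 128, per row."""
    lo = np.minimum(x.min(axis=-1, keepdims=True), 0.0)  # ReduceMin, Min(., 0)
    hi = np.maximum(x.max(axis=-1, keepdims=True), 0.0)  # ReduceMax, Max(., 0)
    scale = np.maximum(hi, -lo) / 127.0                  # Neg, Max, Div by 127
    scale = np.maximum(scale, 1e-12)                     # guard against 0 (assumption)
    q = np.round(x * (1.0 / scale)) + 128.0              # Reciprocal, Mul, Round, Add
    q = np.clip(q, 0.0, 255.0).astype(np.uint8)          # Max, Min, Cast
    return q, scale.astype(np.float32)

x = np.random.randn(4, 16).astype(np.float32)
q, s = quantize_symmetric_u8(x)
x_hat = (q.astype(np.float32) - 128.0) * s               # dequantize to check
print(bool(np.all(np.abs(x_hat - x) <= s / 2 + 1e-5)))   # True
```

Because the scale is taken from max(|amax|, |amin|)/127, the round-trip error is bounded by half a quantization step per element, which the final print verifies.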
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1__to_copy_49_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_MatMul_85_quant
uint8[384,1536]
B
〈384×1536〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_mm_12_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Mul_89
float32[1536]
B
〈1536〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Add_93
float32[1536]
B
〈1536〉
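After MatMulInteger, the Cast/Cast/Mul/Mul/Add tail dequantizes the result: the int32 accumulator is cast to float, scaled by the activation scale (Mul_88) and by what appears to be per-output-channel weight scales (Mul_89, the float32[1536] B), then the fc1 bias (the float32[1536] B of Add_93) is added. A sketch of that tail; all values below are illustrative, not taken from the model:

```python
import numpy as np

np.random.seed(0)

# Stand-in int32 accumulator from MatMulInteger, plus illustrative scales/bias.
acc = np.random.randint(-(1 << 15), 1 << 15, size=(8, 1536), dtype=np.int32)
a_scale = np.float32(0.01)                                # activation scale
w_scales = np.random.rand(1536).astype(np.float32) * 0.02  # per-channel scales
bias = np.random.randn(1536).astype(np.float32)

y = acc.astype(np.float32) * a_scale * w_scales + bias    # Cast, Mul, Mul, Add
print(y.shape, y.dtype)  # (8, 1536) float32
```

Keeping the two scale Muls separate (rather than folding them into one constant) matches the graph, where the activation scale is only known at runtime while the weight scales are fixed.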
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc1_1_Reshape_99
int64[4]
shape
〈4〉
Div
_inlfunc__aten_gelu_approximate_none|folded_6_n2
float32
B
= 1.41421353…
Erf
_inlfunc__aten_gelu_approximate_none|folded_6_n3
Add
_inlfunc__aten_gelu_approximate_none|folded_6_n6
float32
B
= 1
Mul
_inlfunc__aten_gelu_approximate_none|folded_6_n7
Mul
_inlfunc__aten_gelu_approximate_none|folded_6_n10
float32
A
= 0.5
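The Div-by-√2 → Erf → Add 1 → Mul-by-x → Mul-by-0.5 run above is the exact (erf-based) GELU, consistent with the `approximate_none` in the inlined function name:

```python
import math

def gelu(x):
    """Exact GELU as decomposed above: 0.5 * x * (1 + erf(x / sqrt(2)))."""
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

print(gelu(0.0), round(gelu(1.0), 6))  # 0.0 0.841345
```

This is the non-approximate form; the tanh approximation common in other exports would lower to a different op chain (Pow, Tanh) rather than Erf.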
Abs
_inlfunc__aten_linalg_vector_norm_onnx|folded_6_n4
ReduceL2
_inlfunc__aten_linalg_vector_norm_onnx|folded_6_n8_n1_n3_n3_n3_n0
int64[2]
axes
〈2〉
ReduceMean
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___0___mlp_grn_1_ReduceMean_9
int64[1]
axes
〈1〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___0___mlp_grn_1_Add_11
float32
B
= 9.99999997…
Div
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___0___mlp_grn_1_aten_div_12_n0
Mul
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___0___mlp_grn_1_Mul_19
Mul
_inlfunc_aten_addcmul|folded_6_n3
float32[1,1,1,1536]
A
〈1×1×1×1536〉
Add
_inlfunc_aten_addcmul|folded_6_n4
float32[1,1,1,1536]
A
〈1×1×1×1536〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___0___mlp_grn_1_Add_21
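The Abs/ReduceL2 → ReduceMean → Add-ε → Div → Mul chain, followed by the addcmul (Mul by the 1×1×1×1536 γ, Add of the 1×1×1×1536 β) and the final residual Add, is timm's GlobalResponseNorm: per-channel spatial L2 norms, normalized by their mean over channels, used to rescale the input. A sketch in NHWC, with the ~1e-6 ε from the constant above:

```python
import numpy as np

np.random.seed(0)

def grn(x, gamma, beta, eps=1e-6):
    """Global Response Norm on an NHWC tensor; gamma/beta correspond to
    the 1x1x1xC constants fed into the addcmul nodes."""
    gx = np.linalg.norm(x, axis=(1, 2), keepdims=True)  # Abs + ReduceL2 over H, W
    nx = gx / (gx.mean(axis=-1, keepdims=True) + eps)   # ReduceMean, Add, Div
    return gamma * (x * nx) + beta + x                  # Mul_19, addcmul, Add_21

x = np.random.randn(1, 5, 5, 8).astype(np.float32)
zeros = np.zeros((1, 1, 1, 8), np.float32)
y = grn(x, zeros, zeros)
print(bool(np.allclose(y, x)))  # True: gamma = beta = 0 makes GRN the identity
```

The γ = β = 0 check reflects how GRN is initialized in ConvNeXt-V2, where the layer starts as an identity and the residual Add carries the input through.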
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_Add_63
float32[1,18,18,1]
B
〈1×18×18×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1__to_copy_53_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_MatMul_85_quant
uint8[1536,384]
B
〈1536×384〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_mm_13_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_Mul_89
float32[384]
B
〈384〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_Add_93
float32[384]
B
〈384〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___0___mlp_fc2_1_Reshape_99
int64[4]
shape
〈4〉
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_0_1_Transpose_4
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_0_1_Add_5
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_container_Sequential_getattr_L__self___stages___2___blocks_1_getattr_l__self___stages___2___blocks_0_1_QuantizeLinear
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_1_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___1___conv_dw_1_0_Conv_0_quant_scales_mul
float32
B
= 0.00353502…
ConvInteger
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_1_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___1___conv_dw_1_0_Conv_0_quant
uint8[384,1,7,7]
w
〈384×1×7×7〉
uint8
w_zero_point
= 127
Cast
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_1_1_getattr_getattr_l__self___stages___2___blocks___1___conv_dw_1_output_quantized_cast
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_1_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___1___conv_dw_1_0_Conv_0_quant_output_scale_mul
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_1_1_getattr_getattr_l__self___stages___2___blocks___1___conv_dw_1_bias_add
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_1_1_Transpose_1
LayerNormalization
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_1_1_timm_layers_norm_LayerNorm_getattr_getattr_L__self___stages___2___blocks___1___norm_1_2_LayerNormalization_0
float32[384]
Scale
〈384〉
float32[384]
B
〈384〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_Add_63
float32[1,18,18,1]
B
〈1×18×18×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1__to_copy_57_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_MatMul_85_quant
uint8[384,1536]
B
〈384×1536〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_mm_14_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_Mul_89
float32[1536]
B
〈1536〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_Add_93
float32[1536]
B
〈1536〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc1_1_Reshape_99
int64[4]
shape
〈4〉
Div
_inlfunc__aten_gelu_approximate_none|folded_7_n2
float32
B
= 1.41421353…
Erf
_inlfunc__aten_gelu_approximate_none|folded_7_n3
Add
_inlfunc__aten_gelu_approximate_none|folded_7_n6
float32
B
= 1
Mul
_inlfunc__aten_gelu_approximate_none|folded_7_n7
Mul
_inlfunc__aten_gelu_approximate_none|folded_7_n10
float32
A
= 0.5
Abs
_inlfunc__aten_linalg_vector_norm_onnx|folded_7_n4
ReduceL2
_inlfunc__aten_linalg_vector_norm_onnx|folded_7_n8_n1_n3_n3_n3_n0
int64[2]
axes
〈2〉
ReduceMean
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___1___mlp_grn_1_ReduceMean_9
int64[1]
axes
〈1〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___1___mlp_grn_1_Add_11
float32
B
= 9.99999997…
Div
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___1___mlp_grn_1_aten_div_12_n0
Mul
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___1___mlp_grn_1_Mul_19
Mul
_inlfunc_aten_addcmul|folded_7_n3
float32[1,1,1,1536]
A
〈1×1×1×1536〉
Add
_inlfunc_aten_addcmul|folded_7_n4
float32[1,1,1,1536]
A
〈1×1×1×1536〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___1___mlp_grn_1_Add_21
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_Add_63
float32[1,18,18,1]
B
〈1×18×18×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1__to_copy_61_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_MatMul_85_quant
uint8[1536,384]
B
〈1536×384〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_mm_15_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_Mul_89
float32[384]
B
〈384〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_Add_93
float32[384]
B
〈384〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___1___mlp_fc2_1_Reshape_99
int64[4]
shape
〈4〉
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_1_1_Transpose_4
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_1_1_Add_5
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_container_Sequential_getattr_L__self___stages___2___blocks_1_getattr_l__self___stages___2___blocks_1_1_QuantizeLinear
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_2_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___2___conv_dw_1_0_Conv_0_quant_scales_mul
float32
B
= 0.00519265…
ConvInteger
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_2_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___2___conv_dw_1_0_Conv_0_quant
uint8[384,1,7,7]
w
〈384×1×7×7〉
uint8
w_zero_point
= 85
Cast
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_2_1_getattr_getattr_l__self___stages___2___blocks___2___conv_dw_1_output_quantized_cast
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_2_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___2___conv_dw_1_0_Conv_0_quant_output_scale_mul
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_2_1_getattr_getattr_l__self___stages___2___blocks___2___conv_dw_1_bias_add
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_2_1_Transpose_1
LayerNormalization
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_2_1_timm_layers_norm_LayerNorm_getattr_getattr_L__self___stages___2___blocks___2___norm_1_2_LayerNormalization_0
float32[384]
Scale
〈384〉
float32[384]
B
〈384〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_Add_63
float32[1,18,18,1]
B
〈1×18×18×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1__to_copy_65_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_MatMul_85_quant
uint8[384,1536]
B
〈384×1536〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_mm_16_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___2___mlp_fc1_1_MatMul_85_quant_output_scale_mul
Cast
stages.2.blocks.2 (continued) — mlp_fc1 epilogue, GELU, GRN, mlp_fc2, residual:

- mlp_fc1 epilogue: Cast/Cast/Mul/Mul dequantize the MatMulInteger accumulator
  (int32 accumulator × input scale × weight scale), then Add bias float32[1536];
  Reshape pairs (int64[4] / int64[2]) move between the 2-D token layout (5625×C)
  and the 4-D NHWC layout (1×75×75×C).
- GELU, exact erf form (_inlfunc__aten_gelu_approximate_none|folded_8):
  Div by 1.41421353… (√2), Erf, Add 1, Mul, Mul 0.5.
- GlobalResponseNorm (mlp_grn): Abs + ReduceL2 over axes 〈2〉 (the spatial dims),
  ReduceMean over axes 〈1〉, Add eps (= 9.99999997…), Div, Mul, then aten_addcmul
  with weight and bias float32[1,1,1,1536], and a residual Add.
- mlp_fc2 (1536→384), dynamically quantized: per-token ReduceMin/ReduceMax,
  Min/Max clamps so the range includes zero, Neg + Max for the symmetric bound,
  Div by 127, Max (eps floor); Reciprocal/Expand/Mul/Round, Add of a
  float32[1,18,18,1] constant, Max/Min clamps, Cast to uint8, plus a
  DynamicQuantizeLinear node; MatMulInteger with B uint8[1536,384],
  b_zero_point = 128; Cast + Mul by the combined output scale; Add bias
  float32[384]; Reshape back to 1×75×75×384.
- Block epilogue: Transpose (NHWC→NCHW), residual Add, DynamicQuantizeLinear
  feeding the next block.
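Each mlp_fc layer in this graph follows the same DynamicQuantizeLinear → MatMulInteger → Cast → Mul(input_scale·weight_scale) → Add(bias) pattern. A minimal NumPy sketch of that computation — function names and the test weights are mine, not from the model:

```python
import numpy as np

def dynamic_quantize_linear(x):
    # Emulates ONNX DynamicQuantizeLinear: per-tensor uint8 affine quantization,
    # with the float range adjusted to always include zero.
    qmin, qmax = 0.0, 255.0
    rmin = min(x.min(), 0.0)
    rmax = max(x.max(), 0.0)
    scale = (rmax - rmin) / (qmax - qmin)
    scale = scale if scale > 0 else 1.0
    zp = np.uint8(np.clip(np.round(qmin - rmin / scale), qmin, qmax))
    q = np.clip(np.round(x / scale) + zp, qmin, qmax).astype(np.uint8)
    return q, np.float32(scale), zp

def quantized_linear(x, w_q, w_scale, w_zp, bias):
    # DynamicQuantizeLinear -> MatMulInteger -> Cast -> Mul(scales) -> Add(bias)
    x_q, x_scale, x_zp = dynamic_quantize_linear(x)
    # MatMulInteger subtracts the zero points and accumulates in int32.
    acc = (x_q.astype(np.int32) - np.int32(x_zp)) @ \
          (w_q.astype(np.int32) - np.int32(w_zp))
    return acc.astype(np.float32) * (x_scale * w_scale) + bias
```

The uint8 weights in the graph (e.g. B uint8[1536,384] with b_zero_point = 128) correspond to `w_q`/`w_zp` here; the float result approximates the unquantized `x @ w + bias`.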
stages.2.blocks.3 — same structure as blocks.2:

- conv_dw: ConvInteger, depthwise, w uint8[384,1,7,7], w_zero_point = 186;
  scale Mul (= 0.00584665…), Cast, bias Add; Transpose to NHWC;
  LayerNormalization with Scale/B float32[384].
- mlp_fc1 (384→1536): the same dynamic per-token quantization chain
  (ReduceMin/ReduceMax, Neg/Max, Div 127, Round, Add float32[1,18,18,1],
  clamps, Cast), DynamicQuantizeLinear; MatMulInteger with B uint8[384,1536],
  b_zero_point = 128; dequantize (Cast/Mul), Add bias float32[1536],
  Reshape bookkeeping.
- GELU erf form (folded_9); GlobalResponseNorm with weight/bias
  float32[1,1,1,1536] and eps = 9.99999997….
- mlp_fc2 (1536→384): same quantization chain; MatMulInteger with
  B uint8[1536,384], b_zero_point = 128; Add bias float32[384].
- Transpose, residual Add, DynamicQuantizeLinear feeding the next block.
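The inlined _aten_gelu_approximate_none functions above lower GELU to its exact erf form — Div by 1.41421353… (√2), Erf, Add 1, Mul by x, Mul by 0.5 — i.e. gelu(x) = 0.5·x·(1 + erf(x/√2)). A one-function sketch:

```python
import math

def gelu_erf(x: float) -> float:
    # 0.5 * x * (1 + erf(x / sqrt(2))): the exact (non-tanh) GELU,
    # matching the Div/Erf/Add/Mul/Mul chain in the graph.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))
```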
stages.2.blocks.4 — same structure as blocks.2/3:

- conv_dw: ConvInteger, depthwise, w uint8[384,1,7,7], w_zero_point = 105;
  scale Mul (= 0.00614256…), Cast, bias Add; Transpose to NHWC;
  LayerNormalization with Scale/B float32[384].
- mlp_fc1 (384→1536): dynamic per-token quantization chain,
  DynamicQuantizeLinear; MatMulInteger with B uint8[384,1536],
  b_zero_point = 128; dequantize, Add bias float32[1536].
- GELU erf form (folded_10); GlobalResponseNorm with weight/bias
  float32[1,1,1,1536] and eps = 9.99999997….
- mlp_fc2 (1536→384): same quantization chain; MatMulInteger with
  B uint8[1536,384], b_zero_point = 128; Add bias float32[384].
- Transpose, residual Add, DynamicQuantizeLinear feeding the next block.
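The mlp_grn chain above (node names reference timm_layers_grn_GlobalResponseNorm) — Abs/ReduceL2 over H and W, ReduceMean over channels, Add eps, Div, Mul, addcmul with learned weight/bias, residual Add — is timm's Global Response Norm on NHWC tensors. A NumPy sketch, assuming eps ≈ 1e-6 as the truncated constant 9.99999997… suggests:

```python
import numpy as np

def global_response_norm(x, gamma, beta, eps=1e-6):
    """x: NHWC; gamma/beta broadcastable to the channel dim (eps is assumed)."""
    gx = np.sqrt((x ** 2).sum(axis=(1, 2), keepdims=True))  # ReduceL2 over H, W
    nx = gx / (gx.mean(axis=-1, keepdims=True) + eps)       # Div by channel mean
    return x + gamma * (x * nx) + beta                      # addcmul + residual Add
```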
stages.2.blocks.5 — listing truncated:

- conv_dw: ConvInteger, depthwise, w uint8[384,1,7,7], w_zero_point = 157;
  scale Mul (= 0.00459072…), Cast, bias Add; Transpose to NHWC;
  LayerNormalization with Scale/B float32[384].
- mlp_fc1: the per-token quantization chain begins (ReduceMin/ReduceMax,
  Min/Max clamps, Neg/Max, Div by 127, Max eps floor,
  Reciprocal/Expand/Mul/Round, Add float32[1,18,18,1], Max/Min clamps,
  Reshape/Cast); the listing ends mid-chain at Reshape_77.
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1__to_copy_89_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_MatMul_85_quant
uint8[384,1536]
B
〈384×1536〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_mm_22_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Mul_89
float32[1536]
B
〈1536〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Add_93
float32[1536]
B
〈1536〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc1_1_Reshape_99
int64[4]
shape
〈4〉
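The long chain above (ReduceMin/ReduceMax → Neg/Max → Div by 127 → Round/clamp/Cast → DynamicQuantizeLinear → MatMulInteger → scale Mul) quantizes the fc1 activations on the fly and runs the matmul in int32 before rescaling to float. A simplified per-tensor sketch, assuming a symmetric absmax scale and a uint8 midpoint zero point (the exported graph's per-row reductions and rounding-offset Add are elided; the function name is illustrative):

```python
import numpy as np

def dyn_quant_matmul(x, w_q, w_scale, w_zero):
    """Dynamic activation quantization + MatMulInteger, per-tensor version."""
    # Range search (the ReduceMin/ReduceMax/Min/Max/Neg nodes), anchored at 0.
    rmin, rmax = min(float(x.min()), 0.0), max(float(x.max()), 0.0)
    x_scale = max(rmax, -rmin) / 127.0   # the Div-by-127 node; assumes x != 0
    x_zero = 128                         # midpoint of the uint8 range
    # Round, clamp to [0, 255], cast to uint8 (Round/Max/Min/Cast nodes).
    x_q = np.clip(np.round(x / x_scale) + x_zero, 0, 255).astype(np.uint8)
    # MatMulInteger: zero-point-shifted int32 accumulation.
    acc = (x_q.astype(np.int32) - x_zero) @ (w_q.astype(np.int32) - w_zero)
    # Final rescale (the quant_scales_mul / output_scale_mul nodes).
    return acc.astype(np.float32) * (x_scale * w_scale)
```

The static `B = 1` constant feeding the scales_mul node suggests the weight scale is folded into a single multiplier; here the two scales are kept separate for clarity.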
Div
_inlfunc__aten_gelu_approximate_none|folded_11_n2
float32
B
= 1.41421353…
Erf
_inlfunc__aten_gelu_approximate_none|folded_11_n3
Add
_inlfunc__aten_gelu_approximate_none|folded_11_n6
float32
B
= 1
Mul
_inlfunc__aten_gelu_approximate_none|folded_11_n7
Mul
_inlfunc__aten_gelu_approximate_none|folded_11_n10
float32
A
= 0.5
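The Div (by 1.41421353… ≈ √2) → Erf → Add 1 → Mul → Mul 0.5 sequence above is the exact, non-tanh GELU, 0.5 · x · (1 + erf(x/√2)) — consistent with the `approximate_none` tag in the node names. As a one-liner (function name is illustrative):

```python
import math

def gelu_exact(x):
    # Div -> Erf -> Add -> Mul -> Mul chain: exact GELU,
    # 0.5 * x * (1 + erf(x / sqrt(2))).
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))
```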
Abs
_inlfunc__aten_linalg_vector_norm_onnx|folded_11_n4
ReduceL2
_inlfunc__aten_linalg_vector_norm_onnx|folded_11_n8_n1_n3_n3_n3_n0
int64[2]
axes
〈2〉
ReduceMean
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___5___mlp_grn_1_ReduceMean_9
int64[1]
axes
〈1〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___5___mlp_grn_1_Add_11
float32
B
= 9.99999997…
Div
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___5___mlp_grn_1_aten_div_12_n0
Mul
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___5___mlp_grn_1_Mul_19
Mul
_inlfunc_aten_addcmul|folded_11_n3
float32[1,1,1,1536]
A
〈1×1×1×1536〉
Add
_inlfunc_aten_addcmul|folded_11_n4
float32[1,1,1,1536]
A
〈1×1×1×1536〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___5___mlp_grn_1_Add_21
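The Abs/ReduceL2 → ReduceMean → Add (ε ≈ 1e-6, shown truncated as 9.99999997…) → Div → Mul → addcmul → Add group above is timm's GlobalResponseNorm in NHWC layout: an L2 magnitude per channel over the spatial axes, normalized by its mean across channels, then applied back to the input with learned gamma/beta (the 1×1×1×1536 constants) plus a residual. A sketch under those assumptions (function name is illustrative):

```python
import numpy as np

def global_response_norm(x, gamma, beta, eps=1e-6):
    """NHWC GlobalResponseNorm sketch matching the node group above."""
    # ReduceL2 over the spatial axes: per-channel global magnitude.
    gx = np.sqrt(np.sum(x * x, axis=(1, 2), keepdims=True))
    # ReduceMean across channels, Add eps, Div: competitive normalization.
    nx = gx / (np.mean(gx, axis=-1, keepdims=True) + eps)
    # Mul + addcmul + final Add: residual plus the scaled response.
    return x + gamma * (x * nx) + beta
```

With gamma and beta initialized to zero (timm's default) the layer starts as an identity, which the residual form makes explicit.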
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Add_63
float32[1,18,18,1]
B
〈1×18×18×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1__to_copy_93_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_MatMul_85_quant
uint8[1536,384]
B
〈1536×384〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_mm_23_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Mul_89
float32[384]
B
〈384〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Add_93
float32[384]
B
〈384〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___5___mlp_fc2_1_Reshape_99
int64[4]
shape
〈4〉
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_5_1_Transpose_4
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_5_1_Add_5
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_container_Sequential_getattr_L__self___stages___2___blocks_1_getattr_l__self___stages___2___blocks_5_1_QuantizeLinear
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_6_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___6___conv_dw_1_0_Conv_0_quant_scales_mul
float32
B
= 0.00422685…
ConvInteger
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_6_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___6___conv_dw_1_0_Conv_0_quant
uint8[384,1,7,7]
w
〈384×1×7×7〉
uint8
w_zero_point
= 99
Cast
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_6_1_getattr_getattr_l__self___stages___2___blocks___6___conv_dw_1_output_quantized_cast
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_6_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___6___conv_dw_1_0_Conv_0_quant_output_scale_mul
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_6_1_getattr_getattr_l__self___stages___2___blocks___6___conv_dw_1_bias_add
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_6_1_Transpose_1
LayerNormalization
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_6_1_timm_layers_norm_LayerNorm_getattr_getattr_L__self___stages___2___blocks___6___norm_1_2_LayerNormalization_0
float32[384]
Scale
〈384〉
float32[384]
B
〈384〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_Add_63
float32[1,18,18,1]
B
〈1×18×18×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1__to_copy_97_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_MatMul_85_quant
uint8[384,1536]
B
〈384×1536〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_mm_24_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_Mul_89
float32[1536]
B
〈1536〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_Add_93
float32[1536]
B
〈1536〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc1_1_Reshape_99
int64[4]
shape
〈4〉
Div
_inlfunc__aten_gelu_approximate_none|folded_12_n2
float32
B
= 1.41421353…
Erf
_inlfunc__aten_gelu_approximate_none|folded_12_n3
Add
_inlfunc__aten_gelu_approximate_none|folded_12_n6
float32
B
= 1
Mul
_inlfunc__aten_gelu_approximate_none|folded_12_n7
Mul
_inlfunc__aten_gelu_approximate_none|folded_12_n10
float32
A
= 0.5
Abs
_inlfunc__aten_linalg_vector_norm_onnx|folded_12_n4
ReduceL2
_inlfunc__aten_linalg_vector_norm_onnx|folded_12_n8_n1_n3_n3_n3_n0
int64[2]
axes
〈2〉
ReduceMean
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___6___mlp_grn_1_ReduceMean_9
int64[1]
axes
〈1〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___6___mlp_grn_1_Add_11
float32
B
= 9.99999997…
Div
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___6___mlp_grn_1_aten_div_12_n0
Mul
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___6___mlp_grn_1_Mul_19
Mul
_inlfunc_aten_addcmul|folded_12_n3
float32[1,1,1,1536]
A
〈1×1×1×1536〉
Add
_inlfunc_aten_addcmul|folded_12_n4
float32[1,1,1,1536]
A
〈1×1×1×1536〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___6___mlp_grn_1_Add_21
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_Add_63
float32[1,18,18,1]
B
〈1×18×18×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1__to_copy_101_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_MatMul_85_quant
uint8[1536,384]
B
〈1536×384〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_mm_25_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_Mul_89
float32[384]
B
〈384〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_Add_93
float32[384]
B
〈384〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___6___mlp_fc2_1_Reshape_99
int64[4]
shape
〈4〉
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_6_1_Transpose_4
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_6_1_Add_5
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_container_Sequential_getattr_L__self___stages___2___blocks_1_getattr_l__self___stages___2___blocks_6_1_QuantizeLinear
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_7_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___7___conv_dw_1_0_Conv_0_quant_scales_mul
float32
B
= 0.00367002…
ConvInteger
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_7_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___7___conv_dw_1_0_Conv_0_quant
uint8[384,1,7,7]
w
〈384×1×7×7〉
uint8
w_zero_point
= 128
Cast
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_7_1_getattr_getattr_l__self___stages___2___blocks___7___conv_dw_1_output_quantized_cast
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_7_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___7___conv_dw_1_0_Conv_0_quant_output_scale_mul
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_7_1_getattr_getattr_l__self___stages___2___blocks___7___conv_dw_1_bias_add
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_7_1_Transpose_1
LayerNormalization
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_7_1_timm_layers_norm_LayerNorm_getattr_getattr_L__self___stages___2___blocks___7___norm_1_2_LayerNormalization_0
float32[384]
Scale
〈384〉
float32[384]
B
〈384〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_Add_63
float32[1,18,18,1]
B
〈1×18×18×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1__to_copy_105_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_MatMul_85_quant
uint8[384,1536]
B
〈384×1536〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_mm_26_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_Mul_89
float32[1536]
B
〈1536〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_Add_93
float32[1536]
B
〈1536〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc1_1_Reshape_99
int64[4]
shape
〈4〉
Div
_inlfunc__aten_gelu_approximate_none|folded_13_n2
float32
B
= 1.41421353…
Erf
_inlfunc__aten_gelu_approximate_none|folded_13_n3
Add
_inlfunc__aten_gelu_approximate_none|folded_13_n6
float32
B
= 1
Mul
_inlfunc__aten_gelu_approximate_none|folded_13_n7
Mul
_inlfunc__aten_gelu_approximate_none|folded_13_n10
float32
A
= 0.5
Abs
_inlfunc__aten_linalg_vector_norm_onnx|folded_13_n4
ReduceL2
_inlfunc__aten_linalg_vector_norm_onnx|folded_13_n8_n1_n3_n3_n3_n0
int64[2]
axes
〈2〉
ReduceMean
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___7___mlp_grn_1_ReduceMean_9
int64[1]
axes
〈1〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___7___mlp_grn_1_Add_11
float32
B
= 9.99999997…
Div
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___7___mlp_grn_1_aten_div_12_n0
Mul
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___7___mlp_grn_1_Mul_19
Mul
_inlfunc_aten_addcmul|folded_13_n3
float32[1,1,1,1536]
A
〈1×1×1×1536〉
Add
_inlfunc_aten_addcmul|folded_13_n4
float32[1,1,1,1536]
A
〈1×1×1×1536〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___7___mlp_grn_1_Add_21
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_Add_63
float32[1,18,18,1]
B
〈1×18×18×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1__to_copy_109_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_MatMul_85_quant
uint8[1536,384]
B
〈1536×384〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_mm_27_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_Mul_89
float32[384]
B
〈384〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_Add_93
float32[384]
B
〈384〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___7___mlp_fc2_1_Reshape_99
int64[4]
shape
〈4〉
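Each `mlp_fc1` / `mlp_fc2` linear layer is lowered to the same sequence: DynamicQuantizeLinear on the activations, MatMulInteger against a uint8 weight (zero point 128), Cast to float, Mul by the product of activation and weight scales, then the bias Add. A minimal numpy emulation of that path (function names are mine, not from the graph):

```python
import numpy as np

def dynamic_quantize_linear(x):
    # Emulates ONNX DynamicQuantizeLinear (uint8, asymmetric); the range is
    # extended to include 0 so that zero is exactly representable.
    rmin, rmax = min(float(x.min()), 0.0), max(float(x.max()), 0.0)
    scale = (rmax - rmin) / 255.0 or 1.0
    zp = int(np.clip(round(-rmin / scale), 0, 255))
    q = np.clip(np.round(x / scale) + zp, 0, 255).astype(np.uint8)
    return q, np.float32(scale), np.uint8(zp)

def quantized_linear(x, w_q, w_scale, w_zp, bias):
    # DynamicQuantizeLinear -> MatMulInteger -> Cast -> Mul(scales) -> Add(bias)
    x_q, x_scale, x_zp = dynamic_quantize_linear(x)
    acc = (x_q.astype(np.int32) - int(x_zp)) @ (w_q.astype(np.int32) - int(w_zp))
    return acc.astype(np.float32) * (x_scale * w_scale) + bias
```

The surrounding Reshape pairs (4-D ↔ 2-D) just flatten the NHWC activations to `(H*W, C)` for the matmul and restore the spatial layout afterwards.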
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_7_1_Transpose_4
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_7_1_Add_5
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_container_Sequential_getattr_L__self___stages___2___blocks_1_getattr_l__self___stages___2___blocks_7_1_QuantizeLinear
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_8_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___8___conv_dw_1_0_Conv_0_quant_scales_mul
float32
B
= 0.00341320…
ConvInteger
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_8_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___8___conv_dw_1_0_Conv_0_quant
uint8[384,1,7,7]
w
〈384×1×7×7〉
uint8
w_zero_point
= 134
Cast
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_8_1_getattr_getattr_l__self___stages___2___blocks___8___conv_dw_1_output_quantized_cast
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_8_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___2___blocks___8___conv_dw_1_0_Conv_0_quant_output_scale_mul
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_8_1_getattr_getattr_l__self___stages___2___blocks___8___conv_dw_1_bias_add
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_8_1_Transpose_1
LayerNormalization
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_8_1_timm_layers_norm_LayerNorm_getattr_getattr_L__self___stages___2___blocks___8___norm_1_2_LayerNormalization_0
float32[384]
Scale
〈384〉
float32[384]
B
〈384〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_Add_63
float32[1,18,18,1]
B
〈1×18×18×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1__to_copy_113_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_MatMul_85_quant
uint8[384,1536]
B
〈384×1536〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_mm_28_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_Mul_89
float32[1536]
B
〈1536〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_Add_93
float32[1536]
B
〈1536〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc1_1_Reshape_99
int64[4]
shape
〈4〉
Div
_inlfunc__aten_gelu_approximate_none|folded_14_n2
float32
B
= 1.41421353…
Erf
_inlfunc__aten_gelu_approximate_none|folded_14_n3
Add
_inlfunc__aten_gelu_approximate_none|folded_14_n6
float32
B
= 1
Mul
_inlfunc__aten_gelu_approximate_none|folded_14_n7
Mul
_inlfunc__aten_gelu_approximate_none|folded_14_n10
float32
A
= 0.5
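The `_aten_gelu_approximate_none` cluster above (Div by 1.41421353…, Erf, Add 1, Mul, Mul 0.5) is the exact erf-based GELU, gelu(x) = 0.5·x·(1 + erf(x/√2)):

```python
import math

def gelu_exact(x: float) -> float:
    # Div(x, sqrt(2)) -> Erf -> Add(1) -> Mul(x) -> Mul(0.5)
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))
```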
Abs
_inlfunc__aten_linalg_vector_norm_onnx|folded_14_n4
ReduceL2
_inlfunc__aten_linalg_vector_norm_onnx|folded_14_n8_n1_n3_n3_n3_n0
int64[2]
axes
〈2〉
ReduceMean
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___8___mlp_grn_1_ReduceMean_9
int64[1]
axes
〈1〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___8___mlp_grn_1_Add_11
float32
B
= 9.99999997…
Div
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___8___mlp_grn_1_aten_div_12_n0
Mul
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___8___mlp_grn_1_Mul_19
Mul
_inlfunc_aten_addcmul|folded_14_n3
float32[1,1,1,1536]
A
〈1×1×1×1536〉
Add
_inlfunc_aten_addcmul|folded_14_n4
float32[1,1,1,1536]
A
〈1×1×1×1536〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___2___blocks___8___mlp_grn_1_Add_21
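The GlobalResponseNorm nodes above (ReduceL2 over the spatial axes, ReduceMean over channels, Add of ~1e-6, Div, Mul, the `aten_addcmul` pair, and a residual Add) correspond to timm's GRN on NHWC tensors. A numpy sketch under that reading:

```python
import numpy as np

def global_response_norm(x, gamma, beta, eps=1e-6):
    # x is NHWC; gamma/beta are broadcastable (1, 1, 1, C) parameters.
    gx = np.sqrt((x ** 2).sum(axis=(1, 2), keepdims=True))  # ReduceL2 over H, W
    nx = gx / (gx.mean(axis=-1, keepdims=True) + eps)       # ReduceMean, Add, Div
    return x + gamma * (x * nx) + beta                      # Mul, addcmul, residual Add
```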
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_Add_63
float32[1,18,18,1]
B
〈1×18×18×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1__to_copy_117_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_MatMul_85_quant
uint8[1536,384]
B
〈1536×384〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_mm_29_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_Mul_89
float32[384]
B
〈384〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_Add_93
float32[384]
B
〈384〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___2___blocks___8___mlp_fc2_1_Reshape_99
int64[4]
shape
〈4〉
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_8_1_Transpose_4
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___2___blocks_8_1_Add_5
Transpose
_inlfunc_timm_layers_norm_LayerNorm2d_getattr_L__self___stages___3___downsample_0_1_Transpose_0
LayerNormalization
_inlfunc_timm_layers_norm_LayerNorm2d_getattr_L__self___stages___3___downsample_0_1_LayerNormalization_1
float32[384]
Scale
〈384〉
float32[384]
B
〈384〉
Transpose
_inlfunc_timm_layers_norm_LayerNorm2d_getattr_L__self___stages___3___downsample_0_1_Transpose_2
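The LayerNorm2d lowering above is a transpose sandwich: permute NCHW to NHWC, apply LayerNormalization over the (now last) channel axis with the learned Scale and B tensors, then permute back. Sketched in numpy (eps is an assumed default):

```python
import numpy as np

def layer_norm_2d(x_nchw, scale, bias, eps=1e-6):
    x = x_nchw.transpose(0, 2, 3, 1)                   # Transpose_0: NCHW -> NHWC
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    x = (x - mu) / np.sqrt(var + eps) * scale + bias   # LayerNormalization_1
    return x.transpose(0, 3, 1, 2)                     # Transpose_2: NHWC -> NCHW
```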
DynamicQuantizeLinear
_inlfunc_timm_models_convnext_ConvNeXtStage_stages_3_1_torch_nn_modules_container_Sequential_getattr_L__self___stages___3___downsample_1_0_getattr_l__self___stages___3___downsample_0_1_QuantizeLinear
Mul
_inlfunc_timm_models_convnext_ConvNeXtStage_stages_3_1_torch_nn_modules_container_Sequential_getattr_L__self___stages___3___downsample_1_0_torch_nn_modules_conv_Conv2d_getattr_L__self___stages___3___downsample_1_1_1_Conv_0_quant_scales_mul
float32
B
= 0.00383704…
ConvInteger
_inlfunc_timm_models_convnext_ConvNeXtStage_stages_3_1_torch_nn_modules_container_Sequential_getattr_L__self___stages___3___downsample_1_0_torch_nn_modules_conv_Conv2d_getattr_L__self___stages___3___downsample_1_1_1_Conv_0_quant
uint8[768,384,2,2]
w
〈768×384×2×2〉
uint8
w_zero_point
= 130
Cast
_inlfunc_timm_models_convnext_ConvNeXtStage_stages_3_1_getattr_l__self___stages___3___downsample_1_output_quantized_cast
Mul
_inlfunc_timm_models_convnext_ConvNeXtStage_stages_3_1_torch_nn_modules_container_Sequential_getattr_L__self___stages___3___downsample_1_0_torch_nn_modules_conv_Conv2d_getattr_L__self___stages___3___downsample_1_1_1_Conv_0_quant_output_scale_mul
Add
_inlfunc_timm_models_convnext_ConvNeXtStage_stages_3_1_getattr_l__self___stages___3___downsample_1_bias_add
DynamicQuantizeLinear
_inlfunc_timm_models_convnext_ConvNeXtStage_stages_3_1_getattr_l__self___stages___3___downsample_1_QuantizeLinear
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_0_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___3___blocks___0___conv_dw_1_0_Conv_0_quant_scales_mul
float32
B
= 0.00492629…
ConvInteger
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_0_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___3___blocks___0___conv_dw_1_0_Conv_0_quant
uint8[768,1,7,7]
w
〈768×1×7×7〉
uint8
w_zero_point
= 117
Cast
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_0_1_getattr_getattr_l__self___stages___3___blocks___0___conv_dw_1_output_quantized_cast
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_0_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___3___blocks___0___conv_dw_1_0_Conv_0_quant_output_scale_mul
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_0_1_getattr_getattr_l__self___stages___3___blocks___0___conv_dw_1_bias_add
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_0_1_Transpose_1
LayerNormalization
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_0_1_timm_layers_norm_LayerNorm_getattr_getattr_L__self___stages___3___blocks___0___norm_1_2_LayerNormalization_0
float32[768]
Scale
〈768〉
float32[768]
B
〈768〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Add_63
float32[1,9,9,1]
B
〈1×9×9×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1__to_copy_121_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_MatMul_85_quant
uint8[768,3072]
B
〈768×3072〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_mm_30_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Mul_89
float32[3072]
B
〈3072〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Add_93
float32[3072]
B
〈3072〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc1_1_Reshape_99
int64[4]
shape
〈4〉
Div
_inlfunc__aten_gelu_approximate_none|folded_15_n2
float32
B
= 1.41421353…
Erf
_inlfunc__aten_gelu_approximate_none|folded_15_n3
Add
_inlfunc__aten_gelu_approximate_none|folded_15_n6
float32
B
= 1
Mul
_inlfunc__aten_gelu_approximate_none|folded_15_n7
Mul
_inlfunc__aten_gelu_approximate_none|folded_15_n10
float32
A
= 0.5
Abs
_inlfunc__aten_linalg_vector_norm_onnx|folded_15_n4
ReduceL2
_inlfunc__aten_linalg_vector_norm_onnx|folded_15_n8_n1_n3_n3_n3_n0
int64[2]
axes
〈2〉
ReduceMean
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___0___mlp_grn_1_ReduceMean_9
int64[1]
axes
〈1〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___0___mlp_grn_1_Add_11
float32
B
= 9.99999997…
Div
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___0___mlp_grn_1_aten_div_12_n0
Mul
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___0___mlp_grn_1_Mul_19
Mul
_inlfunc_aten_addcmul|folded_15_n3
float32[1,1,1,3072]
A
〈1×1×1×3072〉
Add
_inlfunc_aten_addcmul|folded_15_n4
float32[1,1,1,3072]
A
〈1×1×1×3072〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___0___mlp_grn_1_Add_21
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Add_63
float32[1,9,9,1]
B
〈1×9×9×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1__to_copy_125_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_MatMul_85_quant
uint8[3072,768]
B
〈3072×768〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_mm_31_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Mul_89
float32[768]
B
〈768〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Add_93
float32[768]
B
〈768〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___0___mlp_fc2_1_Reshape_99
int64[4]
shape
〈4〉
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_0_1_Transpose_4
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_0_1_Add_5
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_container_Sequential_getattr_L__self___stages___3___blocks_1_getattr_l__self___stages___3___blocks_0_1_QuantizeLinear
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_1_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___3___blocks___1___conv_dw_1_0_Conv_0_quant_scales_mul
float32
B
= 0.00551574…
ConvInteger
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_1_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___3___blocks___1___conv_dw_1_0_Conv_0_quant
uint8[768,1,7,7]
w
〈768×1×7×7〉
uint8
w_zero_point
= 160
Cast
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_1_1_getattr_getattr_l__self___stages___3___blocks___1___conv_dw_1_output_quantized_cast
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_1_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___3___blocks___1___conv_dw_1_0_Conv_0_quant_output_scale_mul
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_1_1_getattr_getattr_l__self___stages___3___blocks___1___conv_dw_1_bias_add
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_1_1_Transpose_1
LayerNormalization
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_1_1_timm_layers_norm_LayerNorm_getattr_getattr_L__self___stages___3___blocks___1___norm_1_2_LayerNormalization_0
float32[768]
Scale
〈768〉
float32[768]
B
〈768〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Add_63
float32[1,9,9,1]
B
〈1×9×9×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1__to_copy_129_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_MatMul_85_quant
uint8[768,3072]
B
〈768×3072〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_mm_32_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Mul_89
float32[3072]
B
〈3072〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Add_93
float32[3072]
B
〈3072〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc1_1_Reshape_99
int64[4]
shape
〈4〉
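The fc1 block above follows the standard ONNX dynamic-quantization pattern: DynamicQuantizeLinear computes a per-tensor uint8 scale and zero point from the runtime min/max of the activation, MatMulInteger accumulates in int32 against the pre-quantized uint8 weight (`b_zero_point = 128`), and the Cast/Mul/Add tail rescales by the product of the two scales and adds the float bias. A rough numpy sketch of the whole pattern, assuming per-tensor quantization (function names are illustrative, not from the graph):

```python
import numpy as np

def dynamic_quantize_linear(x):
    """Sketch of ONNX DynamicQuantizeLinear: uint8 quantization with
    scale/zero-point derived from the runtime range of x (range must
    include zero so that 0.0 is exactly representable)."""
    qmin, qmax = 0.0, 255.0
    x_min = min(float(x.min()), 0.0)
    x_max = max(float(x.max()), 0.0)
    scale = (x_max - x_min) / (qmax - qmin)
    zero_point = np.clip(round(qmin - x_min / scale), qmin, qmax)
    x_q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return x_q, np.float32(scale), np.uint8(zero_point)

def quantized_linear(x, w_q, w_scale, w_zp, bias):
    """DynamicQuantizeLinear -> MatMulInteger -> rescale -> bias add,
    mirroring the exported node sequence: the integer matmul runs on
    zero-point-shifted operands and the int32 result is scaled back."""
    x_q, x_scale, x_zp = dynamic_quantize_linear(x)
    acc = (x_q.astype(np.int32) - np.int32(x_zp)) @ (w_q.astype(np.int32) - np.int32(w_zp))
    return acc.astype(np.float32) * (x_scale * w_scale) + bias
```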
Div
_inlfunc__aten_gelu_approximate_none|folded_16_n2
float32
B
= 1.41421353…
Erf
_inlfunc__aten_gelu_approximate_none|folded_16_n3
Add
_inlfunc__aten_gelu_approximate_none|folded_16_n6
float32
B
= 1
Mul
_inlfunc__aten_gelu_approximate_none|folded_16_n7
Mul
_inlfunc__aten_gelu_approximate_none|folded_16_n10
float32
A
= 0.5
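The Div(≈√2) → Erf → Add(1) → Mul → Mul(0.5) chain above is the exact (erf-based, non-tanh-approximate) GELU, which matches the `_aten_gelu_approximate_none` function name. As a formula: gelu(x) = 0.5·x·(1 + erf(x/√2)). A one-line sketch:

```python
import math

def gelu_exact(x):
    """Exact GELU, matching the exported Div(sqrt(2)) -> Erf -> Add(1)
    -> Mul -> Mul(0.5) node chain."""
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))
```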
Abs
_inlfunc__aten_linalg_vector_norm_onnx|folded_16_n4
ReduceL2
_inlfunc__aten_linalg_vector_norm_onnx|folded_16_n8_n1_n3_n3_n3_n0
int64[2]
axes
〈2〉
ReduceMean
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___1___mlp_grn_1_ReduceMean_9
int64[1]
axes
〈1〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___1___mlp_grn_1_Add_11
float32
B
= 9.99999997…
Div
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___1___mlp_grn_1_aten_div_12_n0
Mul
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___1___mlp_grn_1_Mul_19
Mul
_inlfunc_aten_addcmul|folded_16_n3
float32[1,1,1,3072]
A
〈1×1×1×3072〉
Add
_inlfunc_aten_addcmul|folded_16_n4
float32[1,1,1,3072]
A
〈1×1×1×3072〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___1___mlp_grn_1_Add_21
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Add_63
float32[1,9,9,1]
B
〈1×9×9×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1__to_copy_133_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_MatMul_85_quant
uint8[3072,768]
B
〈3072×768〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_mm_33_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Mul_89
float32[768]
B
〈768〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Add_93
float32[768]
B
〈768〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___1___mlp_fc2_1_Reshape_99
int64[4]
shape
〈4〉
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_1_1_Transpose_4
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_1_1_Add_5
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_container_Sequential_getattr_L__self___stages___3___blocks_1_getattr_l__self___stages___3___blocks_1_1_QuantizeLinear
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_2_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___3___blocks___2___conv_dw_1_0_Conv_0_quant_scales_mul
float32
B
= 0.00770069…
ConvInteger
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_2_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___3___blocks___2___conv_dw_1_0_Conv_0_quant
uint8[768,1,7,7]
w
〈768×1×7×7〉
uint8
w_zero_point
= 175
Cast
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_2_1_getattr_getattr_l__self___stages___3___blocks___2___conv_dw_1_output_quantized_cast
Mul
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_2_1_torch_nn_modules_conv_Conv2d_getattr_getattr_L__self___stages___3___blocks___2___conv_dw_1_0_Conv_0_quant_output_scale_mul
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_2_1_getattr_getattr_l__self___stages___3___blocks___2___conv_dw_1_bias_add
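The ConvInteger node above quantizes the 7×7 depthwise convolution the same way MatMulInteger quantizes the linears: uint8 input and weight (`w_zero_point = 175`), int32 accumulation, then the output-scale Mul and bias Add dequantize. A simplified numpy sketch of the integer accumulation for a stride-1, same-padded depthwise conv (a loose illustration of the ConvInteger semantics, not the runtime kernel):

```python
import numpy as np

def conv_integer_dw(x_q, x_zp, w_q, w_zp):
    """Sketch of ConvInteger for a depthwise KxK conv, stride 1, 'same'
    padding: shift both uint8 operands by their zero points, pad with
    zeros (the shifted zero point), and accumulate in int32. The caller
    then multiplies by x_scale * w_scale and adds the bias."""
    c, _, k, _ = w_q.shape                                   # weights: [C, 1, K, K]
    pad = k // 2
    xs = np.pad(x_q.astype(np.int32) - int(x_zp),
                ((0, 0), (0, 0), (pad, pad), (pad, pad)))
    ws = w_q.astype(np.int32) - int(w_zp)
    n, _, h, w = x_q.shape
    out = np.zeros((n, c, h, w), dtype=np.int32)
    for i in range(k):                                       # slide the kernel window
        for j in range(k):
            out += xs[:, :, i:i + h, j:j + w] * ws[:, 0, i, j][None, :, None, None]
    return out
```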
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_2_1_Transpose_1
LayerNormalization
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_2_1_timm_layers_norm_LayerNorm_getattr_getattr_L__self___stages___3___blocks___2___norm_1_2_LayerNormalization_0
float32[768]
Scale
〈768〉
float32[768]
B
〈768〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_Add_63
float32[1,9,9,1]
B
〈1×9×9×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1__to_copy_137_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_MatMul_85_quant
uint8[768,3072]
B
〈768×3072〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_mm_34_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_Mul_89
float32[3072]
B
〈3072〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_Add_93
float32[3072]
B
〈3072〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc1_1_Reshape_99
int64[4]
shape
〈4〉
Div
_inlfunc__aten_gelu_approximate_none|folded_17_n2
float32
B
= 1.41421353…
Erf
_inlfunc__aten_gelu_approximate_none|folded_17_n3
Add
_inlfunc__aten_gelu_approximate_none|folded_17_n6
float32
B
= 1
Mul
_inlfunc__aten_gelu_approximate_none|folded_17_n7
Mul
_inlfunc__aten_gelu_approximate_none|folded_17_n10
float32
A
= 0.5
Abs
_inlfunc__aten_linalg_vector_norm_onnx|folded_17_n4
ReduceL2
_inlfunc__aten_linalg_vector_norm_onnx|folded_17_n8_n1_n3_n3_n3_n0
int64[2]
axes
〈2〉
ReduceMean
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___2___mlp_grn_1_ReduceMean_9
int64[1]
axes
〈1〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___2___mlp_grn_1_Add_11
float32
B
= 9.99999997…
Div
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___2___mlp_grn_1_aten_div_12_n0
Mul
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___2___mlp_grn_1_Mul_19
Mul
_inlfunc_aten_addcmul|folded_17_n3
float32[1,1,1,3072]
A
〈1×1×1×3072〉
Add
_inlfunc_aten_addcmul|folded_17_n4
float32[1,1,1,3072]
A
〈1×1×1×3072〉
Add
_inlfunc_timm_layers_grn_GlobalResponseNorm_getattr_getattr_L__self___stages___3___blocks___2___mlp_grn_1_Add_21
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_Reshape_23
int64[4]
shape
〈4〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_Reshape_54
int64[4]
shape
〈4〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_Round_62
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_Add_63
float32[1,9,9,1]
B
〈1×9×9×1〉
Max
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_Reshape_72
int64[4]
shape
〈4〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1__to_copy_141_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_MatMul_85_quant_scales_mul
float32
B
= 1
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_MatMul_85_quant
uint8[3072,768]
B
〈3072×768〉
uint8
b_zero_point
= 128
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_mm_35_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_Mul_89
float32[768]
B
〈768〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_Reshape_92
int64[4]
shape
〈4〉
Add
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_Add_93
float32[768]
B
〈768〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_getattr_getattr_L__self___stages___3___blocks___2___mlp_fc2_1_Reshape_99
int64[4]
shape
〈4〉
Transpose
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_2_1_Transpose_4
Add
_inlfunc_timm_models_convnext_ConvNeXtBlock_getattr_L__self___stages___3___blocks_2_1_Add_5
ReduceMean
_inlfunc_torch_nn_modules_pooling_AdaptiveAvgPool2d_head_global_pool_pool_1_ReduceMean_5
int64[2]
axes
〈2〉
Reshape
_inlfunc__aten_as_strided_onnx_n8
int64[1]
shape
〈1〉
Gather
_inlfunc__aten_as_strided_onnx_n12
Transpose
_inlfunc_timm_layers_norm_LayerNorm2d_head_norm_1_Transpose_0
LayerNormalization
_inlfunc_timm_layers_norm_LayerNorm2d_head_norm_1_LayerNormalization_1
float32[768]
Scale
〈768〉
float32[768]
B
〈768〉
Transpose
_inlfunc_timm_layers_norm_LayerNorm2d_head_norm_1_Transpose_2
Reshape
_inlfunc_timm_layers_classifier_NormMlpClassifierHead_head_1_torch_nn_modules_flatten_Flatten_head_flatten_1_2_Reshape_3
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_Reshape_23
int64[2]
shape
〈2〉
ReduceMin
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_aten_amin_25_n0
int64[1]
axes
〈1〉
ReduceMax
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_aten_amax_27_n0
int64[1]
axes
〈1〉
Min
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_aten_minimum_32_n0
〈…〉
Max
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_aten_maximum_37_n0
〈…〉
Neg
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_aten_neg_38_n0
Max
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_aten_maximum_39_n0
Div
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_aten_div_41_n0
float32
B
= 127
Max
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_Max_48
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_Reshape_54
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_Reshape_80
int64[2]
shape
〈2〉
Reciprocal
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_aten_reciprocal_58_n0
Expand
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_aten_expand_82_n2
int64[2]
shape
〈2〉
Mul
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_Mul_61
Round
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_Round_62
Max
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_Max_68
〈…〉
Min
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_Min_69
〈…〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_Reshape_72
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_Cast_73
Reshape
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_Reshape_77
int64[2]
shape
〈2〉
Cast
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_Cast_83
DynamicQuantizeLinear
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1__to_copy_145_QuantizeLinear
Mul
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_MatMul_85_quant_scales_mul
float32
B
= 0.78823530…
MatMulInteger
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_MatMul_85_quant
uint8[768,11160]
B
〈768×11160〉
uint8
b_zero_point
= 94
Cast
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_mm_36_output_quantized_cast
Mul
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_MatMul_85_quant_output_scale_mul
Cast
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_Cast_86
Cast
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_Cast_87
Mul
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_Mul_88
Mul
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_Mul_89
float32[11160]
B
〈11160〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_Reshape_92
int64[2]
shape
〈2〉
Add
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_Add_93
float32[11160]
B
〈11160〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_Reshape_96
int64[2]
shape
〈2〉
Reshape
_inlfunc_torch_nn_modules_linear_Linear_head_fc_1_Reshape_99
int64[2]
shape
〈2〉
head_1
float32[1,11160]
Graph Properties
main_graph
Inputs
- name: input
  tensor: float32[1,3,300,300]
Outputs
- name: head_1
  tensor: float32[1,11160]
Metrics
36637969
main_graph